National Repository of Grey Literature 7 records found  Search took 0.01 seconds. 
Word2vec Models with Added Context Information
Šůstek, Martin ; Rozman, Jaroslav (referee) ; Zbořil, František (advisor)
This thesis is concerned with the explanation of the word2vec models. Even though word2vec was introduced recently (2013), many researchers have already tried to extend, understand or at least use the model, because it provides surprisingly rich semantic information. This information is encoded in N-dimensional vector representations and can be recalled by performing algebraic operations over the vectors. In addition, I suggest model modifications in order to obtain different word representations. To achieve that, I use public image datasets. The thesis also includes parts dedicated to a word2vec extension based on convolutional neural networks.
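The "algebraic operations" the abstract alludes to are the well-known vector-arithmetic analogies (e.g. king - man + woman ≈ queen). A minimal sketch with hand-crafted toy vectors, not vectors from a trained word2vec model:

```python
import math

# Toy 3-dimensional "word vectors" chosen by hand for illustration only;
# real word2vec embeddings are learned from text and have hundreds of dims.
vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.1, 0.8],
    "man":   [0.1, 0.9, 0.1],
    "woman": [0.1, 0.1, 0.9],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def analogy(a, b, c):
    """Return the word whose vector lies closest to vec(a) - vec(b) + vec(c)."""
    target = [x - y + z for x, y, z in zip(vectors[a], vectors[b], vectors[c])]
    candidates = {w: v for w, v in vectors.items() if w not in (a, b, c)}
    return max(candidates, key=lambda w: cosine(candidates[w], target))

print(analogy("king", "man", "woman"))  # → queen
```

With real embeddings (e.g. loaded via gensim) the same cosine-similarity search recovers many such semantic and syntactic analogies.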
Language Modelling for German
Tlustý, Marek ; Bojar, Ondřej (advisor) ; Hana, Jiří (referee)
The thesis deals with language modelling for German. Its main concern is the specifics of the German language that are troublesome for standard n-gram models. First, the statistical methods of language modelling are described and the relevant language phenomena of German are explained. The thesis then proposes its own variants of n-gram language models aimed at mitigating these problems. The models are trained both with standard n-gram methods and with the maximum-entropy method using n-gram features. The two approaches are compared using the correlation between hand-evaluated sentence fluency and automatic evaluation (perplexity), and their computational requirements are compared as well. Next, the thesis presents a set of custom features representing the counts of grammatical errors for selected phenomena; their success rate is verified by their ability to predict the hand-evaluated fluency. Maximum-entropy models are used, together with custom models that classify solely by the medians of the phenomena values computed from the training data.
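The perplexity used above as the automatic evaluation can be sketched on a toy bigram model. This is an illustrative add-one-smoothed model over a tiny made-up corpus, not one of the thesis's models:

```python
import math
from collections import Counter

# Tiny illustrative German training corpus (whitespace-tokenised).
train = "der hund läuft schnell . der hund schläft .".split()
bigrams = Counter(zip(train, train[1:]))
unigrams = Counter(train)
vocab = set(train)

def prob(w_prev, w):
    # Add-one (Laplace) smoothing so unseen bigrams keep non-zero probability.
    return (bigrams[(w_prev, w)] + 1) / (unigrams[w_prev] + len(vocab))

def perplexity(tokens):
    # exp of the negative average log-probability over the bigram transitions.
    log_p = sum(math.log(prob(p, w)) for p, w in zip(tokens, tokens[1:]))
    return math.exp(-log_p / (len(tokens) - 1))

print(perplexity("der hund läuft .".split()))
```

Lower perplexity means the model finds the sentence less surprising; the thesis correlates this score with human fluency judgements.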
Syntax in methods for information retrieval
Straková, Jana
Title: Information Retrieval Using Syntax Information Author: Bc. Jana Kravalová Department: Institute of Formal and Applied Linguistics Supervisor: Mgr. Pavel Pecina, Ph.D. Supervisor's e-mail address: pecina@ufal.mff.cuni.cz Abstract: In recent years, the application of language modeling in information retrieval has been studied quite extensively. Although language models of any type can be used with this approach, only traditional n-gram models based on surface word order have been employed and described in published experiments (often only unigram language models). The goal of this thesis is to design, implement, and evaluate (on Czech data) a method which would extend a language model with syntactic information automatically obtained from documents and queries. We attempt to incorporate syntactic information into language models and experimentally compare this approach with unigram and bigram models based on surface word order. We also empirically compare methods for smoothing, stemming and lemmatization, and the effectiveness of using stopwords and pseudo-relevance feedback. We perform a detailed analysis of these retrieval methods and describe their performance in detail. Keywords: information retrieval, language modelling, dependency syntax, smoothing
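The surface-order baseline this abstract extends is the query-likelihood model: rank documents by the probability their unigram language model assigns to the query, with smoothing against the whole collection. A minimal sketch with an invented two-document corpus and an illustrative smoothing weight:

```python
import math
from collections import Counter

# Invented toy document collection (Czech, whitespace-tokenised).
docs = {
    "d1": "praha je hlavní město české republiky".split(),
    "d2": "brno je město na moravě".split(),
}
collection = [w for d in docs.values() for w in d]
coll_counts = Counter(collection)
LAMBDA = 0.7  # Jelinek-Mercer weight on the document model (illustrative value)

def score(query, doc_tokens):
    """Log query likelihood under a smoothed unigram document model."""
    counts = Counter(doc_tokens)
    s = 0.0
    for w in query:  # assumes every query word occurs in the collection
        p_doc = counts[w] / len(doc_tokens)
        p_coll = coll_counts[w] / len(collection)
        s += math.log(LAMBDA * p_doc + (1 - LAMBDA) * p_coll)
    return s

query = "hlavní město".split()
ranking = sorted(docs, key=lambda d: score(query, docs[d]), reverse=True)
print(ranking)  # → ['d1', 'd2']
```

The thesis's contribution is to enrich the document model beyond such surface unigrams/bigrams with automatically obtained dependency-syntax information.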
ChatBot Based on Language Modelling
Radvanský, Matěj ; Szőke, Igor (referee) ; Skála, František (advisor)
This paper addresses the use of neural-network language modelling in a chatbot. The problem is solved using natural language processing: the first step of generating a response is an analysis of the user input. Next, sentence beginnings are created and completed by the output of the neural network. All created sentences together form the final chatbot response. The chatbot was compared with the Cleverbot chatbot and a measure of intelligence was determined for both. Based on the testing results, techniques for future development are proposed.
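The "complete a sentence beginning from a language model" step can be sketched without any neural network: here a toy bigram table stands in for the network's next-word distribution, and completion is greedy. This is an illustration of the idea, not the paper's system:

```python
from collections import Counter

# Tiny invented dialogue corpus standing in for real training data.
corpus = ("how are you today . i am fine today . "
          "how are you doing . i am doing well .").split()
bigrams = Counter(zip(corpus, corpus[1:]))

def complete(beginning, max_words=5):
    """Greedily extend a sentence beginning using the most frequent successor
    of the last word (ties broken by reverse-alphabetical tuple order)."""
    tokens = beginning.split()
    for _ in range(max_words):
        prev = tokens[-1]
        successors = [(c, w2) for (w1, w2), c in bigrams.items() if w1 == prev]
        if not successors:
            break
        tokens.append(max(successors)[1])
        if tokens[-1] == ".":  # stop at end-of-sentence marker
            break
    return " ".join(tokens)

print(complete("i am"))  # → i am fine today .
```

In the paper's setting, the bigram lookup would be replaced by the neural network's predicted distribution over the next word.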